Learning a Fuzzy Hyperplane Fat Margin Classifier with Minimum VC dimension

Authors

  • Jayadeva
  • Sanjit S. Batra
  • Siddharth Sabharwal
Abstract

The Vapnik-Chervonenkis (VC) dimension measures the complexity of a learning machine, and a low VC dimension leads to good generalization. The recently proposed Minimal Complexity Machine (MCM) learns a hyperplane classifier by minimizing an exact bound on the VC dimension. This paper extends the MCM classifier to the fuzzy domain. The use of fuzzy memberships is known to reduce the effect of outliers and noise on learning. Experimental results show that, on a number of benchmark datasets, the fuzzy MCM classifier outperforms SVMs and the conventional MCM in terms of generalization, and that it uses fewer support vectors. On several benchmark datasets, the fuzzy MCM classifier yields excellent test set accuracies while using one-tenth the number of support vectors used by SVMs.
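As a rough sketch of the formulation (based on the MCM literature; the symbols h, q_i, C, and the memberships s_i follow common MCM notation and are assumptions here, not quoted from the paper), the soft-margin MCM minimizes a VC-bound variable h plus slack penalties, and the fuzzy variant weights each sample's slack by its membership s_i ∈ (0, 1], so that likely outliers with low membership contribute less to the objective:

\[
\min_{w,\, b,\, h,\, q} \; h + C \sum_{i=1}^{M} s_i\, q_i
\quad \text{s.t.} \quad
h \;\ge\; y_i (w^\top x_i + b) + q_i \;\ge\; 1, \quad q_i \ge 0, \quad i = 1, \dots, M.
\]

Setting all s_i = 1 recovers the conventional soft-margin MCM; the problem remains a linear program in (w, b, h, q).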


Similar Articles

Learning a hyperplane classifier by minimizing an exact bound on the VC dimension

The VC dimension measures the complexity of a learning machine, and a low VC dimension leads to good generalization. While SVMs produce state-of-the-art learning performance, it is well known that the VC dimension of an SVM can be unbounded; despite good results in practice, there is no guarantee of good generalization. In this paper, we show how to learn a hyperplane classifier by minimizing an...
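A hedged sketch of the quantity being minimized (following the MCM paper's general approach; the notation here is an assumption): for a hyperplane w^T x + b = 0 that separates points x_i with labels y_i ∈ {−1, +1}, the MCM works with the ratio

\[
h \;=\; \frac{\max_{i} \; y_i (w^\top x_i + b)}{\min_{i} \; y_i (w^\top x_i + b)},
\]

shows that the VC dimension is bounded above and below by constant multiples of h^2, and minimizes h through an equivalent linear program of the form

\[
\min_{w,\, b,\, h} \; h
\quad \text{s.t.} \quad
h \;\ge\; y_i (w^\top x_i + b) \;\ge\; 1, \quad i = 1, \dots, M.
\]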


Learning with Spectral Kernels and Heavy-Tailed Data

Heavy-tailed data, e.g., graphs in which the degree sequence decays according to a power law, are ubiquitous in applications. In many of those applications, spectral kernels, e.g., Laplacian Eigenmaps and Diffusion Maps, are commonly used analytic tools. We establish learnability results applicable in both settings. Our first result is an exact learning bound for learning a classification hyper...


Feature Selection through Minimization of the VC dimension

Feature selection involves identifying the most relevant subset of input features, with a view to improving the generalization of predictive models by reducing overfitting. Directly searching for the most relevant combination of attributes is NP-hard. Variable selection is of critical importance in many applications, such as micro-array data analysis, where selecting a small number of discriminative...


Dimensionality Dependent PAC-Bayes Margin Bound

Margin is one of the most important concepts in machine learning. Previous margin bounds, both for SVMs and for boosting, are dimensionality independent. A major advantage of this dimensionality independence is that it can explain the excellent performance of SVMs whose feature spaces are often of high or infinite dimension. In this paper we address the question of whether such dimensionality indepen...


A Neurodynamical System for finding a Minimal VC Dimension Classifier

The recently proposed Minimal Complexity Machine (MCM) finds a hyperplane classifier by minimizing an exact bound on the Vapnik-Chervonenkis (VC) dimension. The VC dimension measures the capacity of a learning machine, and a smaller VC dimension leads to improved generalization. On many benchmark datasets, the MCM generalizes better than SVMs and uses far fewer support vectors than the number u...




Journal:
  • CoRR

Volume: abs/1501.02432  Issue:

Pages: -

Publication date: 2015